Thumbs Up
Captures a 'thumbs up' hand gesture as user input while the user is standing back from the phone. This feature is expected to be used for confirming that a user is ready to proceed.
Feature
We suggest auto-detecting which hand is raised, as this reduces confusion for users. The slight caveat of this approach is that if both hands are present, QuickPose defaults to the right hand.
case thumbsUp() // auto detects raised hand, defaulting to right if both are raised
case thumbsUp(side: .left) // left hand only
case thumbsUp(side: .right) // right hand only
case thumbsUp(style: customStyled) // with custom style
Thumbs Up or Down
If you wish to explicitly capture a thumbs down state, the .thumbsUpOrDown()
feature can be substituted in the docs below.
As above, we suggest using auto hand detection.
case thumbsUpOrDown() // auto detects raised hand, defaulting to right if both are raised
case thumbsUpOrDown(side: .left) // left hand only
case thumbsUpOrDown(side: .right) // right hand only
case thumbsUpOrDown(style: customStyled) // with custom style
Basic Implementation
To show results you'll need to modify your view's ZStack
, which we assume is set up as described in the Getting Started Guide:
ZStack(alignment: .top) {
QuickPoseCameraView(useFrontCamera: true, delegate: quickPose)
QuickPoseOverlayView(overlayImage: $overlayImage)
}
The basic implementation requires displaying some text on screen; start by declaring this value in your SwiftUI view.
@State private var feedbackText: String? = nil
And show this feedback text as an overlay on the view, styled with your branding.
ZStack(alignment: .top) {
QuickPoseCameraView(useFrontCamera: true, delegate: quickPose)
QuickPoseOverlayView(overlayImage: $overlayImage)
}
.overlay(alignment: .center) {
if let feedbackText = feedbackText {
Text(feedbackText)
.font(.system(size: 26, weight: .semibold)).foregroundColor(.white).multilineTextAlignment(.center)
.padding(16)
.background(RoundedRectangle(cornerRadius: 8).foregroundColor(Color("AccentColor").opacity(0.8)))
.padding(.bottom, 40)
}
}
Note the use of alignment in .overlay(alignment: .center)
above; you can easily move the overlay around, for example to the bottom with .overlay(alignment: .bottom)
.
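As a sketch, the same overlay anchored to the bottom edge would look like this (only the alignment argument changes; the ZStack and feedbackText state are those declared above):

```swift
// Hypothetical bottom-aligned variant of the overlay shown above.
.overlay(alignment: .bottom) {
    if let feedbackText = feedbackText {
        Text(feedbackText)
            .font(.system(size: 26, weight: .semibold)).foregroundColor(.white).multilineTextAlignment(.center)
            .padding(16)
            .background(RoundedRectangle(cornerRadius: 8).foregroundColor(Color("AccentColor").opacity(0.8)))
            .padding(.bottom, 40) // keeps the text clear of the screen edge
    }
}
```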
This basic version fills the feedback text with the Thumbs Up result as a percentage, and hides the text when the feature result is not available.
quickPose.start(features: [.thumbsUp()], onFrame: { status, image, features, feedback, landmarks in
switch status {
case .success:
overlayImage = image
if let result = features.values.first {
feedbackText = result.stringValue
} else {
feedbackText = nil
}
case .noPersonFound:
feedbackText = "Stand in view"
case .sdkValidationError:
feedbackText = "Be back soon"
}
})
Conditional Styling
To give the user feedback, consider using conditional styling, so that when the user's measurement goes above a threshold, here 0.8, a green highlight is shown.
let greenHighlightStyle = QuickPose.Style(conditionalColors: [QuickPose.Style.ConditionalColor(min: 0.8, max: nil, color: UIColor.green)])
quickPose.start(features: [.thumbsUp(style: greenHighlightStyle)],
onFrame: { status, image, features, feedback, landmarks in ...
})
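Multiple ranges can be combined into one style. The sketch below is an assumption based on the ConditionalColor initializer shown above (in particular, that min may be nil just as max may be), giving a red highlight for low scores and a green one for high scores:

```swift
// Hypothetical two-range conditional style: red below 0.2, green above 0.8,
// default styling in between. Assumes min: nil is accepted like max: nil above.
let trafficLightStyle = QuickPose.Style(conditionalColors: [
    QuickPose.Style.ConditionalColor(min: nil, max: 0.2, color: UIColor.red),
    QuickPose.Style.ConditionalColor(min: 0.8, max: nil, color: UIColor.green)
])
```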
Improving the Captured Results
The basic implementation above would likely capture an incorrect value: in the real world users need time to understand what they are doing, they may change their mind, or QuickPose can simply read an incorrect value due to poor lighting or the user's stance. These issues are partially mitigated by on-screen feedback, but it's best to use a QuickPoseDoubleUnchangedDetector
to keep reading the values until the user has settled on a final answer.
To steady the .thumbsUp()
results, declare a configurable unchanged detector, which can be used to make many of our input features read more reliably.
@State private var unchanged = QuickPoseDoubleUnchangedDetector(similarDuration: 2)
This will only trigger the callback block when the result has stayed the same for 2 seconds. The above uses the default leniency, but this can be modified in the constructor.
@State private var unchanged = QuickPoseDoubleUnchangedDetector(similarDuration: 2, leniency: 0.2) // changed to 20% leniency
The unchanged detector is added to your onFrame callback and is updated every time a result is found, triggering its onChange
callback only when the result has not changed for the specified duration.
quickPose.start(features: [.thumbsUp()], onFrame: { status, image, features, feedback, landmarks in
switch status {
case .success:
overlayImage = image
if let result = features.values.first {
feedbackText = result.stringValue
unchanged.count(result: result.value) {
print("Final Result \(result.value)")
let isThumbsUp = result.value > 0.5
if isThumbsUp {
// your code to save result
} else {
// repeat the task again
}
}
} else {
feedbackText = nil // blank if no hand detected
}
case .noPersonFound:
feedbackText = "Stand in view"
case .sdkValidationError:
feedbackText = "Be back soon"
}
})
Improving Guidance
Despite the improvements above, the user doesn't have clear instructions about what to do. This can be fixed by adding user guidance.
Our recommended pattern is to use an enum
to capture all the states in your application.
enum ViewState: Equatable {
case intro
case measuring(score: Bool)
case completed(score: Bool)
case error(_ prompt: String)
var prompt: String? {
switch self {
case .intro:
return "Are you ready to continue?"
case .measuring(let score):
return score ? "Yes?" : "No?"
case .completed(let score):
return score ? "Yes\nLet's Continue" : "No\nLet's Wait"
case .error(let prompt):
return prompt
}
}
var features: [QuickPose.Feature]{
switch self {
case .intro, .measuring:
return [.thumbsUp()]
case .completed, .error:
return []
}
}
}
Alongside the states we also provide a prompt
text, which instructs the user at each step. Similarly, the features
property determines which features to pass to QuickPose; note that for the completed state QuickPose doesn't process any features.
Declare this so your SwiftUI views can access it, starting in the .intro
state. Our example is simplified to just demonstrate the pattern; you would typically start with more positioning guidance.
@State private var state: ViewState = .intro
Next make some modifications, so that your feedbackText is pulled from the state prompt by default:
.overlay(alignment: .center) {
if let feedbackText = state.prompt {
Text(feedbackText)
.font(.system(size: 26, weight: .semibold)).foregroundColor(.white).multilineTextAlignment(.center)
.padding(16)
.background(RoundedRectangle(cornerRadius: 8).foregroundColor(Color("AccentColor").opacity(0.8)))
.padding(.bottom, 40)
}
}
This now means you can remove the feedbackText
declaration:
//@State private var feedbackText: String? = nil // remove the feedbackText
There are two changes we need to make. First, update QuickPose with the features for each state:
.onChange(of: state) { _ in
quickPose.update(features: state.features)
}
Then we should start QuickPose with the state's features as well.
.onAppear {
quickPose.start(features: state.features, onFrame: { status, image, features, feedback, landmarks in
...
And in the onFrame
callback, update the state instead of the feedbackText. This allows the UI input to change the view state in a controlled manner, so that, for example, the .intro
state can only be reached when the user's hand is missing from the .measuring
state, or from the .error
state.
quickPose.start(features: state.features, onFrame: { status, image, features, feedback, landmarks in
switch status {
case .success:
overlayImage = image
if let result = features.values.first {
state = .measuring(score: result.value > 0.5)
unchanged.count(result: result.value) {
state = .completed(score: result.value > 0.5)
let isThumbsUp = result.value > 0.5
if isThumbsUp {
// your code to save result
} else {
// repeat the task again
}
}
} else if case .measuring = state {
state = .intro
} else if case .error = state {
state = .intro
}
case .noPersonFound:
state = .error("Stand in view")
case .sdkValidationError:
state = .error("Be back soon")
}
})